An Inquiry/Review by Frederick David Abraham on

 

“Human electroencephalograms seen as fractal time series: Mathematical analysis and visualization”, by Vladimir Kulish, Alexei Sourin, Olga Sourina (Nanyang Technological University, Singapore), Computers in Biology and Medicine, 36 (2006), 291-302.

 

Abstract

 

Terry Marks-Tarlow mentioned this article on CHAOPSYC, the discussion list server at UVM owned by David Houston, Bob Porter, and me and shared with the Society for Chaos Theory in Psychology and the Life Sciences (and open to all who observe proper netiquette). I inquired about this paper because I wondered if the brain measures that she mentioned might have the temporal-spatial resolution necessary to answer the kinds of questions on cognition with which she was concerned. The article offers innovative data-analytic and graphic methods. This review represents my inquiry into the issues involved. It was a challenge due to my mathematical limitations, and should be considered as raising questions, not as an authoritative critique. I suggest ways to develop its usefulness based on my own ideas and on those of Sprott.

 

Both the fractal-statistical and the EEG-visualization parts of this paper are highly innovative and useful. As research reporting they are incomplete, the paper being mainly a methodological one. Some detail is missing, which I presume is due mainly to editorial policies of the journal regarding space availability. I was unfamiliar with both aspects of the paper, found the review/inquiry a valuable learning experience, and present my investigation of it as a motivation to others who might be interested in developing or using such tools. To my thinking, this paper is highly sophisticated and valuable.

 

Table of Contents

 

1.  Introduction

2.  Mathematical Analysis

3.  Experimental Technique and Signal Processing

4.  Fractal Spectra of EEGs

5.  Visualization

6.  My Wish List

7.  References

 

1.  Introduction

 

According to the authors, the goal was that of “developing new methods of processing data recorded by well-established techniques [that] may prove useful while deeper penetrating into the mystery of human consciousness.” Since the brain is non-linear and FFT power spectral analyses are linear, non-linear measures should prove useful. They also suggest the visualization of EEG measurements with a topographic representation of the head, which is reminiscent of the early efforts at EEG visualization (Walter & Shipton, 1951).

 

Note that it is nonetheless reasonable to consider that the frequency resolution is useful, and that multiple analytic methods might supplement the information to be extracted from the signal(s). For example, R. Abraham & C. Shaw (Dynamics: The Geometry of Behavior) have a figure comparing the representations of various attractors with three images each: the portrait, the time series, and the frequency spectrum (Part 2, Fig. 4.5.7; repeated in Abraham, Abraham, & Shaw, Fig. II-49. These books also show the relationship to characteristic exponents, aka Liapunov exponents, but not to fractal dimension, the subject of this paper). As Abarbanel (1996) puts it, “Since chaotic motion produces continuous broadband Fourier spectra, we clearly have to replace narrowband Fourier signatures with other characteristics of the system for purposes of identification and classification.” (p. 69.) He then mentions fractal dimensions and Lyapunov exponents as the two main candidates as classifiers.

 

2.  Mathematical Analysis

 

At first I wondered why they were starting with the Rényi entropy measure, which does not use sequential data, but rather the probability distribution function of the time signal (i.e., the EEG voltages). Then I saw where they went with it, which was to the generalized fractal dimension, Dq, the convolution over the probability distributions of order q summed over the bins of the EEG voltages and the size of the hypersphere, δV (which is equivalent to r or ε of the usual formulations of D, since there is a measured dimension, voltage, associated with r). This is their equation 6 (Sprott, §13.2.2, eqn. 13.14, p. 338; Kantz & Schreiber, §11.3.1, eqn. 11.12, p. 187; Ott, Sauer, & Yorke, §2.2, eqn. 2.2, p. 15; Abarbanel, §5.2, eqn. 5.12, p. 73; Rényi, 1970; Grassberger, 1983; Hentschel & Procaccia, 1983; Paladin & Vulpiani, 1987). The resulting Dq as a function of q is sometimes called the fractal spectrum, also the generalized Rényi entropy, generalized fractal dimensions, generalized dimension, the spectrum of fractional dimensions, multifractal spectrum of dimensions, or multifractal. They also note three special cases of Dq: q = 0 (box-counting or capacity dimension), q = 1 (information dimension), and q = 2 (correlation dimension).
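For readers without the cited texts at hand, the standard form of this generalized dimension (e.g., Sprott’s eqn. 13.14), which I take to be equivalent to their equation 6 with δV in the role of ε, is:

```latex
D_q \;=\; \lim_{\delta V \to 0}\; \frac{1}{q-1}\,
\frac{\ln \sum_i p_i^{\,q}}{\ln \delta V},
\qquad
D_1 \;=\; \lim_{\delta V \to 0}\; \frac{\sum_i p_i \ln p_i}{\ln \delta V},
```

where p_i is the probability that the signal falls in the i-th voltage bin of size δV, and D_1 is the q → 1 limit (the information dimension discussed below).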

This section establishes the high level of sophistication of the authors, and their intention to use a little-used method (the multifractal spectrum) for finding aspects of the data that reduction to a single measure of D might not yield, a method used by only a few before (Amaral et al., 2001, on heartbeat dynamics; Solé & Manrubia, 1995, on rainforests; Mureika, Cupchik, & Dyer, 2004, on art; and others—see Sprott, p. 336). They also point to their intention to use special visualization techniques with a topological display on a model head. I might suggest that traditional attractor-reconstruction methods could be a useful adjunct visualization technique.

 

Finally, for comments re this section: of the four books I have mentioned, any one of them presents a more than adequate treatment of the methodologies involved, but each has strengths that warrant owning all of them. The Abarbanel (Analysis of Observed Chaotic Data, 1996; a recent message from Amazon informs of a second edition with two co-authors) is excellent for data visualization, especially in explaining the box-counting method with figures for the Hénon attractor (reproduced in my article in the first issue of NDPLS, 1997). The Kantz & Schreiber (Nonlinear Time Series Analysis, 1997) is especially good at pointing out the pitfalls in applying the techniques (§6.4), when automated algorithmic methods work well, and when the art of looking at the graphics of intermediate steps is needed in an analysis. They also caution about the loss of information in trying to characterize a dynamic by reduction to single numbers—e.g., p. 38 on visual inspection of data; p. 72 on independence from measurement and analysis parameters. They are also good at stressing the use of multiple approaches to data analysis, and they have an appendix containing some of their numerical routines in both Fortran and C. The Ott et al. (Coping with Chaos, 1994) is uniquely organized, with general chapters clearly explaining techniques but also many reprints of classic original papers, saving trips to the library and web; a must-have reference work from that perspective. Sprott (Chaos and Time-Series Analysis, 2003) is excellent for (a) clear explanations, (b) completeness, (c) appendices including a huge list with summaries of known attractors, (d) historical vignettes on many of the authors in the history of dynamics, and (e) a web site that helps with exercises and keeps the book constantly updated. I have yet to see Robert Gregson’s and Lenny Smith’s latest books, and look forward to them. The Thompson & Stewart that I have mentioned before is stronger on dynamical theory than on data analysis. I have not seen the Strogatz, but suspect it is also terrific.

 

 

3.  Experimental Technique and Signal Processing

 

I was surprised to see EEGs of 10-15 mV p/p; that sure beats the days when I did EEG from indwelling electrodes (theirs were scalp), which might explain why they could be so casual with respect to shielding and isolation of the subjects, and with respect to grounding issues (not enough information, but I assume they had no problem with ground loops). Mine (indwelling, cat) were more in the range reported by Babloyantz (1988, Estimation of Correlation Dimensions from Single and Multichannel Recordings, in Başar & Bullock, Eds., Brain Dynamics, vol. 2, pp. 122-130; see Fig. 2 showing typical ranges of EEG voltages; also reprinted in Başar, 1990, Chaos in Brain Function). At any rate, their signals looked pretty clean. I was curious about their putting cotton balls behind the ears. Could that have been for EOG rather than ECG? A misprint? Regarding statistical processing, “the significance test was performed.” Which tests, and on what?

 

 

From Babloyantz (1988).

 

4.  Fractal Spectra of EEGs

 

Considering the virtuosity of experiments with monkeys and single-unit recordings (Fuster, many others) in choice experiments, one gets curious about the timing of the EEG sample with respect to the question asked of the subjects, such as before, during, and after the answer. Presumably these EEG samples were taken during the utterance, although the 5 sec EEG sample is considerably longer than the utterance of the answer would take.

 

They state that the Fourier transform does not yield information about amplitude, but it does provide amplitude information, at least in a relative sense (power per frequency band, when in arbitrary units), and this can be true in the absolute sense of the distribution of V²/δf as a function of f if calibrations are made which reveal the transfer function of the measuring system (sans the electrode/brain interface in our work: Abraham, Brown, & Gardiner, 1968. Hard to believe it had never been done before; we did it with a train of rectangular pulses whose convolution yielded the whole spectrum with a single signal). While they accuse the Fourier of also not yielding fractionality information, it is just as true that while the fractal dimension is sensitive to frequency information, it does not yield any frequency information directly. If D discriminated between the responses, is it hard to believe that the Fourier, performed with appropriate parametric choices, would not also yield a difference? These are minor points and not very consequential considering the important methods that they are developing.
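For what it is worth, here is the “power per frequency band” computation in a minimal, hedged sketch (toy data; the 256 Hz rate is the paper’s, the rest is mine; absolute calibration would still require the transfer function of the recording chain):

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                                          # the paper's digitization rate
rng = np.random.default_rng(1)
eeg = np.cumsum(rng.standard_normal(int(5 * fs)))   # toy stand-in for a 5 s epoch

# Welch power spectral density: power per frequency band, in (units^2)/Hz.
f, pxx = welch(eeg, fs=fs, nperseg=512)
print(f[np.argmax(pxx)], pxx.max())
```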

 

While the EEG voltages may have been large compared to EEGs reported earlier, the opposite is true for the fractal dimensions. After Grassberger & Procaccia first reported their algorithms for measuring D2 in 1983, there was a spate of articles reporting D2 and attractor reconstructions for EEG, mostly with D2s of 4-6, compared to the <2 values reported in Kulish et al. (the Başar books just mentioned are full of them, e.g., Başar’s introduction in Chaos in Brain Function, “Chaotic Dynamics and Resonance Phenomena in Brain Function: Progress, Perspectives, and Thoughts,” Table 2). Why might this be? One can only speculate, since not many details of the analysis procedures were given. But perhaps we can get a clue from Kantz & Schreiber, §6.3, where they point out that calculating the correlation sum and dimension involves attractor reconstruction, which involves the delay embedding procedure (where calculating the delay τ is critical); a value which “yields a convincing phase portrait should do for the correlation sum as well.” Also, results should be “invariant under reasonable changes to the embedding procedure.” (p. 73.)

 

“Once the embedding vectors are reconstructed, the estimation of dimension is done in two steps,” determining the correlation sum, C(m, ε), where m is the embedding dimension and ε (the r of others) is the radius of the covering hypersphere, and then to “inspect C(m, ε) for signatures of self-similarity. If these signatures are convincing enough, we can compute a value for the dimension. Both steps require some care in order to avoid wrong or misleading results.” (p. 73.)

 

“The straightforward estimator, Eqn. (6.1), turns out to be biased toward too small dimensions when the pairs entering the sum are not statistically independent. For time series data with nonzero autocorrelations, independence cannot be assumed. . . The most important temporal correlations are caused by the fact that data close in time are also close in space” [For the EEG in Kulish et al., the state space is based on voltage].  “It is more than likely that the majority of dimension estimates published for field measurements are seriously too low because they mistake temporal coherence for geometrical structure.” (Kantz & Schreiber, p. 73; demonstrated by Theiler, 1986).

 

The solution involves decimating the time series to eliminate the correlations. Kantz & Schreiber’s next section, §6.4, shows how C(ε) as a function of ε, and d(ε) as a function of ε with parameter m, are examined for linear portions (those signatures of self-similarity). The finite nature of the attractor and the differing densities of points at different portions of the attractor affect the statistical reliability. Sprott summarizes some 11 steps or procedures for the analysis of time series data (Sprott, §13.8; summarized here in §6 below). Many of these procedures are not mentioned, not utilized, or are incompletely described in the Kulish paper, so no estimate can be made of whether their procedures satisfy these precautions. Nonetheless, the use of the generalized D is an extremely important contribution.
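To make the machinery concrete, here is a toy correlation sum along Grassberger-Procaccia lines, with a Theiler window to exclude the temporally correlated pairs just discussed; this is my sketch, not the authors’ procedure:

```python
import numpy as np

def correlation_sum(x, m, tau, eps, theiler=20):
    """C(m, eps): fraction of pairs of delay vectors closer than eps (max norm),
    excluding pairs within `theiler` samples of each other in time."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    count, total = 0, 0
    for i in range(n - theiler - 1):
        d = np.max(np.abs(emb[i + theiler + 1:] - emb[i]), axis=1)
        count += int(np.sum(d < eps))
        total += n - (i + theiler + 1)
    return count / total if total else 0.0

# D2 would then be estimated as the slope of log C(m, eps) versus log eps,
# over a range of eps where the curves for successive m have converged.
```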

 

One might remember that in reconstruction work, a usual criterion for selecting a lag, τ, is to use the delay required for the autocorrelation to decay from 1 to 0.
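A minimal sketch of that criterion, assuming nothing beyond an ordinary NumPy array of voltages:

```python
import numpy as np

def lag_at_first_zero_autocorrelation(x):
    """Smallest lag at which the autocorrelation of x first reaches zero."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0, 1, 2, ...
    acf /= acf[0]                                        # acf[0] == 1 by construction
    below = np.flatnonzero(acf <= 0.0)
    return int(below[0]) if below.size else None         # None if it never reaches zero
```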

 

Their main results, the fractal spectra, for one subject are in their Fig. 3, and, averaged over all Ss (sorry, APA manual; I’m old fashioned, and you can’t hide the realities) and substantially identical, are in their Fig. 4, just below. They certainly are well behaved. The curves for “yes” answers are higher than for “no” answers, as is D0 (Hausdorff-Besicovitch version). In addition, the range of D’s from D-∞ to D+∞ was also greater for “yes” than “no” answers. Following the latter result, they state, “Hence, the ‘YES’ spectra are on the average more fractal than the ‘NO’ spectra. This implies that the brain is on the average more active while answering ‘YES’ question. In addition, it is evident from Fig. 4 that on the average, more unexpected values are present in the ‘YES’ signal, whereas the ‘NO’ signal is on the average more predictable and uniform.”

The conclusion that the spectra are greater for the “yes” answers certainly seems safe, even without error curves. The conclusion that the brain is more active is not necessarily wrong, but requires some conjecturing to have any meaning. It is consistent with the typical results, e.g., Figure 2 of Babloyantz (1988/1990; see supra), which shows D2 as a function of voltage for disease (epilepsy, Creutzfeldt-Jakob), sleep (stages 2, 4, and REM), and eyes closed and open, with D2s increasing from about 2 to 10 (as voltage ranges decrease). A clue to interpreting such a result is given by Babloyantz: “The synchronization between neurons which occur in pathologies is reflected by high amplitudes and low dimensions.” (See the caption of her figure.) That is, it is not necessarily more or less activity, but the coherence of neuronal activity that is being broken down into subpopulations of neurons involved in more information-processing tasks occurring at any time (also see Abraham, 1997; Abraham et al., 1973, on coherence and brain transactions). Why would “yes” involve such an increase in information processing in the brain? While it is impossible to tell from this experiment, or to conjecture with any confidence (i.e., regarding the involvement of greater emotion, less security, more information to evaluate cognitively), it would be of value to use signal-detection-theoretic methodology and interrogation of the research participants to try to understand emotional and cognitive strategies.

 

What do we make of the fact that the range of Dq is greater for “yes” than “no”? Probably not much. The increase was due to the negative end of the spectra (where q < 0), which is more sensitive to the less dense portions of the attractor, which is where the more extreme voltages are. Multifractals are designed to overcome the problems of committing to a particular D due to the variability in the density (probability of occupancy) of the attractor. Sprott (2003, p. 338): “The limit [of Dq as] q approaches +∞ gives the local dimension in the most densely populated region of the attractor, and the limit q approaches –∞ gives the local dimension in the least densely populated region. The former can be calculated more accurately than the latter because there are usually many more points in the denser region (which is how it got to be dense!).”
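A toy estimator of Dq from the voltage distribution makes this density sensitivity easy to see: the sum of p^q is dominated by the fullest bins for large positive q and by the sparsest bins for large negative q. Again, this is my sketch (with δV as the histogram bin width), not the authors’ algorithm:

```python
import numpy as np

def generalized_dimension(signal, q, bin_widths):
    """Estimate D_q of a 1-D voltage distribution by box counting on the
    voltage axis (a rough sketch, not the authors' algorithm)."""
    log_eps, log_sum = [], []
    for eps in bin_widths:
        edges = np.arange(signal.min(), signal.max() + eps, eps)
        counts, _ = np.histogram(signal, bins=edges)
        p = counts[counts > 0] / counts.sum()
        if np.isclose(q, 1.0):
            log_sum.append(np.sum(p * np.log(p)))      # q -> 1 limit (information dimension)
        else:
            log_sum.append(np.log(np.sum(p ** q)) / (q - 1.0))
        log_eps.append(np.log(eps))
    slope, _ = np.polyfit(log_eps, log_sum, 1)         # D_q is the scaling slope
    return slope

# usage: D0 (capacity), D1 (information), D2 (correlation) for a toy "EEG"
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(4096))
widths = np.logspace(-1.5, 0.5, 10) * np.std(x)
print({q: round(generalized_dimension(x, q, widths), 3) for q in (0, 1, 2)})
```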

 

The “yes” signal has more unexpected values; the “no,” more predictable ones. Do we need Dq to tell us that? Not only is it obvious from the time signal (EEG), but wouldn’t a simple frequency (probability) distribution and/or its statistical moments tell us the same? Doesn’t this result tell us as much about D as D does about predictability? Of course the big question is: does signal predictability reflect cognitive predictability? This is an interesting question that should be amenable to further experimentation.
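To be concrete about what I mean by ordinary moments of the voltage distribution (my toy sketch, not anything from the paper):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def amplitude_moments(eeg):
    """Garden-variety summary of the voltage distribution."""
    return {
        "mean": float(np.mean(eeg)),
        "variance": float(np.var(eeg)),
        "skewness": float(skew(eeg)),
        "kurtosis": float(kurtosis(eeg)),   # excess kurtosis (scipy default)
    }
```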

 

Kulish et al. comment that the information dimension, D1, being higher for “yes” than “no”, is unexpected, since the two answers should have “equal informational content”. However, it is not necessarily the case that the cognitive and emotional requirements (nor the brain processes required) for the two answers should be the same, and indeed the fact that the whole spectrum (not just D1) differs for the two answers would seem to indicate that they are not. With respect to the EEG signal, D1 equals the limit of the Shannon entropy [Sprott: “the amount of information required to specify the state of the system”] divided by log δV as δV approaches 0, which, as Sprott observes, “describes how fast the information needed to specify a point on the attractor increases as r [δV in this paper] decreases.” (Sprott, 2003, p. 339.) [In the above paragraphs, the word ‘approaches’ is used instead of an arrow due to limitations of the MS html editor’s fonts or symbol translator.]

 

Kulish et al. make a further conjecture that “this result [unequal D1s] can be viewed as an indirect proof of the fact that the operation of the brain is fuzzy, that is there is an overlap between the set in question (“YES”) and its complement (“NO”)”. Very indirect!! Fuzzy?! What can this mean? While yes and no may be complementary as far as the language goes, the expectation that completely different brain processes might support them is an unlikely hypothesis. Suppose the brain were divided into two sets of neurons that only talked to neurons in their own set. Might we not have achieved the same result? And why, as they claim, should the D1s add up to 2 bits? [D1,yes = 0.921 bits; D1,no = 0.853 bits; Σ = 1.774 bits, whereas, if independent, the sum should be 2 bits according to Kulish et al. I am a bit confused, as I didn’t think the limits defining D, even the information dimension D1, had units or dimensions of their own, so this argument escapes me; but I am on unsure ground on this.] Babloyantz’s “eyes closed” and “eyes open” conditions each had D2s (which must be very close to D1, as the Dq curves all have the same shape, and adjacent Dqs are very similar) >2. There likely is an overlap in the processing of information in making the binary decision, but it is hard for me to see how it follows from the non-additivity of the D1s of the EEGs for the two responses. In a final observation about the fractal results, Kulish et al. point out that the Dq curves are nearly identical to that for the logistic equation (but they don’t specify the parameter of the logistic).

 

It may be relevant that Sprott shows nearly identical curves for the logistic equation (he specifies the logistic equation’s constant for different multifractals and approximates the logistic with an asymmetrical Cantor set, §13.4.2, Fig. 13.11, pp. 342-343) and for a pair of asymmetric Cantor sets (§13.3.1, Fig. 13.8, pp. 340-341). [One might note, a la the Kulish argument, that these curves do share the same brain, that is, the same equation except for control parameters, with the sum of D1s in the neighborhood of 1.]

 

 

Sprott: Figure 13.8, Generalized dimension for two different asymmetric Cantor sets. (Thanks to Sprott for sending an electronic copy of this figure.)

 

The fact that two identical deterministic difference equations differing only in the choice of their parameters yield similar curves whose difference can be specified with but a few parameters (amplitude and range of Dq, D0, slope of the tangent at the inflection point at D0) does not imply that two real-world processes, such as the cognitive ability to answer ‘yes’ or ‘no’ to questions, are generated by either similar or different processes. Nor does it tell you whether the processes are even deterministic or stochastic (where the difference equations for the latter would have to have a probabilistic component). It does mean that the dimensional magnitude of one is greater than the other. If deterministic equations represent similar processes, in what way are they fuzzy other than that they share some components of a process but not others? That an attractor is chaotic or possesses fractal dimension and can be characterized with Shannon entropy neither makes nor disproves the probabilistic case. For those who wish to invoke stochastic resonance, specific theories would have to be built on the brain/cognitive processes involved in which noise contributes to hill climbing from the basin of one attractor into that of another, and I suspect such attempts would likely be rather simplifications of the several cognitive/emotional processes involved.

 

5.  Visualization

 

This is a really exciting methodology, and its potential use in psychophysical and psychological research and medical diagnosis and therapy is vast. I am not current on what other similar methodologies might be around, but this one looks sophisticated to me.

 

In brief, the magnitude of the EEG is evaluated at each point on the skull and can be tracked in real time by means of ‘blobbies’, partial spherical forms visualized on a mannequin head, using computer-graphics technology developed by an award-winning, distinguished team of biomedical engineers, so I will trust their algorithms even though I cannot follow them all adequately.

 

The “blobby” is defined by equation 12, p. 298:

 

“where a is a scale factor, b is an exponent scale factor and g is a threshold value.”

 

The index i is over the EEG channels. x, y, z are coordinates in a 3D Cartesian space around each electrode position, and I presume, when unindexed, some sort of average. The distance of the electrode to the reference position is ri. The voltages enter via b, and are thus the factor that affects the size of the blobby. More than one blobby can be shown at a time to compare different aspects of an experiment and different loci of activity and their magnitudes. The threshold g requires a level of EEG activity before a blobby is visualized for any given electrode location. A blobby is calculated for a given time position, but this can be advanced for real-time movies or snapshots for particular time windows, advancing through the 5 sec of the response window.
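Since equation 12 itself is not reproduced here, the following is only a guess at its general shape: a standard Blinn-style blobby/metaball field consistent with their description of a, b, g, and ri (my sketch, not necessarily their equation):

```python
import numpy as np

def blobby_field(point, electrode_xyz, b, a=1.0, g=0.5):
    """Implicit field at one sampled point (x, y, z); the rendered surface
    is where the field crosses zero.
    electrode_xyz : (n_channels, 3) electrode coordinates
    b             : (n_channels,) exponent scale factors derived from the
                    EEG magnitudes at time t
    a, g          : scale factor and threshold, per their description."""
    r2 = np.sum((electrode_xyz - np.asarray(point)) ** 2, axis=1)  # squared distances r_i^2
    return a * np.sum(np.exp(-b * r2)) - g

# A Marching Cubes pass (mentioned in the correspondence below) would sample
# this field on a 3-D grid around the head model and extract the isosurface.
```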

 

I asked the authors for some clarifications:

 

 

Sourin/Abraham correspondence 15-December-2006

 

Alexei: Dear Fred,

 

Thank you for your interest in our project.

 

With reference to your questions about the visualization technique used in

 our work[, you asked]:

 

Fred: Since there was no use of an index for time, I essentially asked how

they dealt with time (f cranks out a point for each point xi,yi.zi and time t).

 

Alexei: Time t can be any value within the given range of time for each

experiment. If we need to make an animated sequence, we define start, stop

and increment values of time. To compare different experiments we position

the resulting time-dependent models so that the experiment starts

synchronously.

 

Fred: Is x without an index a mean over some set from which xi is drawn?

 

Alexei: x, y, z are the Cartesian coordinates of sampled points within a

volume including the shape being modeled (head + changing shapes).

Sampling of points is performed by the rendering algorithm (so-called

Marching Cubes) and their values and number of them depend on the

precision of the representation selected as well as on the particular rendering

model used (blobs, metaballs, soft objects). In other words, x, y, z can be

coordinates of any point in 3D space for which we calculate the function

value to answer the question whether this point belongs to the shape or not.

 

Fred: Would it also not depend on the exponent factor ri?

 

Alexei: Values bi(t) are provided in the input data set for each channel while

values ri are functions of the coordinates of the electrodes and they

are calculated for each point being tested.

 

Alexei adds: You can find more information on the shape modeling method

used in this work in http://www.ntu.edu.sg/home/assourin/Frep.htm

 

Best Regards, Alexei Sourin

 


Their web site gives the general nature, with color graphic renditions, of the morphing of shapes, but no further information regarding the computational algorithms. The graphs of the results are clear, and exceptionally clever.

 

Here is an example of the first and second 0.77 sec epochs during an experiment, showing blobbies superimposed on the head for ‘yes’ and ‘no’ responses, the color difference surviving as shades of gray here and in the article. It nicely shows the reduced area of cortical activation (it became more so over the ensuing 6 epochs). (Notice the original 256 Hz digitization rate is decimated by half.) Interactive computer windows and palettes allow real-time changing of both viewing and computational parameters. Boolean operations allow additional experimental evaluations.

 

 

From Figure 6, p. 300.

 

They conclude that EEG activity is greater for ‘yes’ responses, and that ‘no’ responses require less mental activity but are more stressful, as inferred from their lesser cortical involvement but greater persistence in the visual cortex.

 

Despite the sophistication of the visualization methods, I think there are other measurements besides EEG amplitude, including power spectra and co-spectra, especially EEG coherence measures, and most especially some of the parameters of the generalized fractal spectra, that could be fed into the visualization and might help with more subtle cognitive interpretations. Additionally, I would love to see more complete results for the experiment, and I think that conclusions based on such a brief methodological report would become much more exciting when tested in a more extensive and rigorous experimental context.

 

6.  My Wish List

 

a)     When unusual statistical and geometrical representations are used, whether fractal dimensions or blobbies, for which the probability distribution functions and their higher moments (variance, skewness, kurtosis) may not be estimated, it is often helpful to evaluate their reliability and significance with Monte Carlo methods. They would constitute a welcome addition to the present study. (Abraham et al., 1973; Abraham, 1997).

 

b)    It is nice to have calibration of EEG signals and their derivative measures demonstrated (Abraham, Brown, & Gardiner, 1968).

 

c)     Utilizing as many of Sprott’s 11 steps or procedures for the analysis of time series data as possible (quoted or précised from Sprott, 2003, §13.8, pp. 348-349):

 

i. Make sure the data are free of errors.

ii. Test for stationarity.

iii. Explore a variety of ways of plotting the data.

iv. Determine the correlation time or minimum of the autocorrelation or mutual information to optimize the sampling rate.

v. Check the autocorrelation function or Fourier spectra for periodicities.

vi. Make a time-space plot to be sure there are enough data.

vii. Use false nearest neighbors or saturation to establish the embedding dimension in the determination of D2.

viii. If the embedding dimension is low, determine D2.

ix. If there is a low-dimensional attractor, compute Lyapunov exponents, entropy, and growth rate of unpredictability. If high, remove noise.

x. If there is chaos, use surrogate methods for the above tests.

xi. If there is low-dimensional chaos, construct equations and make short-term predictions. If high-dimensional chaos (more common), “some predictability [maybe] is possible, and whose power spectrum and probability distribution allow comparison with theoretical models.”

 

d)    Considering the subtlety, multiplicity, and complexity of transactions within the brain, and their largely unknown nature for the subtleties of cognitive processes, it is not surprising that the best of our analytic techniques are frustratingly inadequate at revealing those subtleties. I am particularly thinking of the relationships which may be taking place between different cortical areas (and subcortical ones, when indwelling electrodes permit), so when the blobbies indicated more than one active area, it could prove of interest to see whether the EEG in those areas shows some covariance/coherence (Abraham, 1997; Abraham et al., 1973, for power spectra, but it could be done with the bi,j here; a toy coherence sketch follows this list). And in fact, all pair-wise sets of electrode results could be funneled into a discriminant or other canonical correlational analysis. And especially, these analyses could be fed into another round of blobby movies.

 

e)     When this paper segued to the visualization, I thought they were going to visualize the fractal spectrum, or at least D0, D1, or D2, or the defining parameters of Dq, so I put that on my wish list also. If you put all these together, blobbies for D, voltage, and paired covariance of voltage, along with Monte Carlo and surrogate methods, and the rest of Sprott’s steps, then the methods so cleverly developed here would more fully realize their potential.

 

f)      And finally, some refinement of the reporting of experimental procedures, such as the temporal alignment of stimuli and responses with the EEG recording, would prove a benefit. Someone taking advantage of all of these would then be on the threshold of the evolution of an exciting experimental program.
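Regarding item (d) above, here is the kind of pairwise coherence computation I have in mind, as a toy sketch on simulated channels (scipy’s Welch-based coherence; nothing here comes from the paper):

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0                                    # the paper's digitization rate
rng = np.random.default_rng(2)
shared = rng.standard_normal(int(5 * fs))     # toy "common source"
ch1 = shared + 0.5 * rng.standard_normal(shared.size)   # two simulated channels
ch2 = shared + 0.5 * rng.standard_normal(shared.size)

# Magnitude-squared coherence between the channels, per frequency band.
f, cxy = coherence(ch1, ch2, fs=fs, nperseg=256)
print(f[np.argmax(cxy)], cxy.max())
```

In practice one would, of course, substitute two recorded EEG channels for the simulated ones, and the resulting coherence values (per band, per electrode pair) could be the quantities fed back into the blobby visualization, as suggested in item (d).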

 

The authors are to be congratulated and thanked for developing these tools for the rest of us to use. I am certainly appreciative of the opportunity to learn as much as possible from this exercise of trying to understand their work, despite my own limitations. And I certainly also thank Terry Marks-Tarlow for sharing her interest in this paper. I submit it here in hopes that others will become interested in this work and that of other authors cited, and will help me develop further my own understanding of it.

 

7.  References

 

Abarbanel, H.D.I.  (1996). Analysis of observed chaotic data. New York:

   Springer-Verlag.

 

Abraham, F.D. (1997). Nonlinear coherence in multivariate research:

   Invariants and the reconstruction of attractors.

   Nonlinear Dynamics, Psychology, and Life Sciences, 1, 7-34.

 

Abraham, F.D., Abraham, R.H., & Shaw, C.D. (1990). A visual introduction to

   dynamical systems theory for psychology. Santa Cruz: Aerial.

 

Abraham, F.D., Brown, D., & Gardiner, M. (1968). Calibration of EEG

   power spectra. Communications in Behavioral Biology, 1, 31-36.

 

Abraham, F.D., Bryant, H., Mettler, M., Bergerson, W., Moore, F.,

   Maderdrut, J., Gardiner, M., Walter, D., & Jennrich, R. (1973).

   Spectrum and discriminant analyses reveal remote rather than local

   sources for hypothalamic EEG: Could waves affect unit activity?

   Brain Research, 49, 349-366.

 

Abraham, R.H., & Shaw, C.D. (1988). Dynamics: The Geometry of

   Behavior, Part 2. Santa Cruz: Aerial.

 

Amaral,  L.A.N., Ivanov, P.C., Aoyagi, N., Hidaka, I., Tomono, S.,

   Goldberger, A.L., Stanley, H.E., & Yamamoto, Y. (2001). Behavioral-

   independent features of complex heartbeat dynamics.

   Physical Review Letters, 86, 6026-6029.

 

Babloyantz, A. (1988/1990). Estimation of correlation dimensions from

   single and multichannel recordings, (in E. Başar & T.H. Bullock,

   (Eds.), Brain dynamics, vol. 2, pp. 122-130; also reprinted in E. Başar

   (Ed.), 1990, Chaos in brain function.) Berlin, Heidelberg:

   Springer-Verlag.

 

Grassberger, P. (1983). Generalized dimensions of strange attractors.

   Physics Letters A, 97, 227-230.

 

Grassberger, P., & Procaccia, I. (1983). Measuring the strangeness of

   strange attractors. Phsica D, 9, 189-208.

 

Hentschel, H.G.E., & Procaccia, I. (1983). The infinite number of

   generalized dimensions of fractals and strange attractors.

   Physica D, 8, 435-444.

 

Kantz, H., & Schreiber, T. (1997). Nonlinear time series analysis.

   Cambridge: Cambridge.

 

Kulish, V., Sourin, A., & Sourina, O. (2006). Human electroencephalograms

   seen as fractal time series: Mathematical analysis and visualization.

   Computers in Biology and Medicine, 36, 291-302.

 

Mureika, J.R., Cupchik, G.C., & Dyer, C.C. (2004). Multifractal fingerprints

   in the visual arts. Leonardo, 37(1), 53-56. Also: 17 May 2005,

   http://arxiv.org/abs/physics/0505117.

 

Ott, E., Sauer, T., & Yorke, J.A. (1994). Coping with chaos: Analysis of

   chaotic data and the exploitation of chaotic systems. New York: Wiley.

 

Paladin, G., & Vulpiani, A. (1987). Anomalous scaling laws in multifractal

   objects. Physics Reports, 156, 147-225.

 

Rényi, A. (1970). Probability theory. Amsterdam: North-Holland.

 

Solé, R.V., & Manrubia, S.C. (1995). Self-similarity in rain forests:

   Evidence for a critical state. Physical Review E, 51, 6250-6253.

 

Sprott, J.C. (2003). Chaos and time-series analysis. Oxford: Oxford.

 

Theiler, J. (1986). Spurious dimension from correlation algorithms applied

   to limited time series data. Physical Review A, 34, 2427-2432.

 

Walter, W.G., & Shipton, H.W. (1951). A new toposcopic display system.

   EEG Clin. Neurophysiol., 3, 281-292.

 

Submitted March 8, 2007

 

© Frederick David Abraham or the authors cited where appropriate. My ideas, if any prove worthy of consideration, may be used freely for academic use and internet intercourse with proper acknowledgment and citation.